
    Why are events important and how to compute them in geospatial research?

    Geospatial research has long centered on objects. While attention to events is growing rapidly, events remain objectified in spatial databases. This paper aims to highlight the importance of events in scientific inquiry and to provide an overview of general event-based approaches to data modeling and computing. As machine learning algorithms and big data become popular in geospatial research, many studies appear to be products of convenience with readily adaptable data and code rather than of curiosity. By asking why events are important and how to compute events in geospatial research, the author intends to provoke thinking about the rationale and conceptual basis of event-based modeling and to emphasize the epistemological role of events in geospatial information science. Events are essential to understanding the world and to communicating that understanding; events provide points of entry for knowledge inquiries and the inquiry processes; and events mediate objects and scaffold causality. We compute events to improve understanding, but event computing and computability depend on event representation. The paper briefly reviews event-based data models in spatial databases and methods to compute events for site understanding and prediction, for spatial impact assessment, and for discovering events' dynamic structures. Concluding remarks summarize the key arguments and comment on opportunities to extend event computability.
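The abstract contrasts object-centered records with events as first-class entities but does not reproduce a data model. As a minimal sketch of what an event-based record can look like (all class names, fields, and sample values here are hypothetical, not taken from the paper):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class GeoEvent:
    """A minimal event record: identity, spatial footprint, temporal extent.

    Unlike an object row (static attributes at a location), an event
    carries an explicit lifespan, so temporal relations such as
    before/after/overlap become first-class queries.
    """
    event_id: str
    kind: str        # e.g. "flood", "storm" (illustrative categories)
    lon: float
    lat: float
    start: datetime
    end: datetime

def temporally_overlap(a: GeoEvent, b: GeoEvent) -> bool:
    """Allen-style interval overlap test between two events."""
    return a.start < b.end and b.start < a.end

flood = GeoEvent("e1", "flood", -95.7, 39.0,
                 datetime(2019, 5, 1), datetime(2019, 5, 20))
storm = GeoEvent("e2", "storm", -95.6, 39.1,
                 datetime(2019, 5, 18), datetime(2019, 5, 19))
print(temporally_overlap(flood, storm))  # True: the storm falls inside the flood span
```

An object-based table would need a separate mechanism to recover such temporal relations; here they follow directly from the representation.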

    From Temporal GIS to Dynamics GIS


    New results on multidimensional Chinese remainder theorem

    The Chinese remainder theorem (CRT) [McClellan and Rader 1979] is well known for applications in fast DFT computation and computer arithmetic. Guessoum and Mersereau [1986] first made headway in extending the CRT to multidimensional (MD) nonseparable systems and showing its usefulness. The present letter generalizes that result and presents a more general form. This more general MD CRT is an exact counterpart of the 1-D CRT.
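The letter's MD generalization replaces integer moduli with integer matrices; the familiar 1-D case it extends can be sketched as follows (a sketch of the classical construction, not the paper's MD form):

```python
def crt(residues, moduli):
    """Reconstruct x (mod m1*m2*...*mk) from the residues x mod mi,
    assuming the moduli are pairwise coprime -- the 1-D CRT setting."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        # pow(Mi, -1, m) is the modular inverse of Mi mod m (Python 3.8+)
        x += r * Mi * pow(Mi, -1, m)
    return x % M

# x = 23: 23 mod 3 = 2, 23 mod 5 = 3, 23 mod 7 = 2
print(crt([2, 3, 2], [3, 5, 7]))  # 23
```

The fast-DFT applications mentioned above rest on exactly this reconstruction: an index modulo a composite length splits into independent indices modulo the coprime factors.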

    Discrete multitone modulation with principal component filter banks

    Discrete multitone (DMT) modulation is an attractive method for communication over a nonflat channel with possibly colored noise. The uniform discrete Fourier transform (DFT) filter bank and the cosine modulated filter bank have in the past been used in this system because of their low complexity. We show in this paper that principal component filter banks (PCFB), which are known to be optimal for data compression and denoising applications, are also optimal for a number of criteria in DMT modulation communication. For example, the PCFB of the effective channel noise power spectrum (noise PSD weighted by the inverse of the channel gain) is optimal for DMT modulation in the sense of maximizing bit rate for fixed power and error probabilities. We also establish an optimality property of the PCFB when scalar prefilters and postfilters are used around the channel. The difference between the PCFB and a traditional filter bank such as the brickwall filter bank or DFT filter bank is significant for effective power spectra which depart considerably from monotonicity. The twisted pair channel, with its bridged taps, NEXT and FEXT noise, and AM interference, therefore appears to be a good candidate for the application of a PCFB. This is demonstrated with the help of numerical results for the case of the ADSL channel.
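The bit-rate criterion mentioned above rests on the standard per-subchannel bit-loading formula b_k = log2(1 + SNR_k / Γ), where Γ is the SNR gap set by the target error probability. A hedged sketch of that formula (the gains, noise levels, and gap value below are illustrative, not from the paper):

```python
import numpy as np

def dmt_bits(signal_power, channel_gain, noise_psd, gamma_db=9.8):
    """Per-subchannel bit load b_k = log2(1 + SNR_k / Gamma) for a DMT
    system; Gamma is the SNR gap for the target error probability.
    All input values here are illustrative."""
    gamma = 10 ** (gamma_db / 10)
    snr = signal_power * np.abs(channel_gain) ** 2 / noise_psd
    return np.log2(1 + snr / gamma)

gains = np.array([1.0, 0.5, 0.1])     # hypothetical subchannel gains
noise = np.array([1e-4, 1e-4, 1e-4])  # flat noise PSD
bits = dmt_bits(1.0, gains, noise)
print(bits)
```

Summing b_k over subchannels gives the total bit rate the filter bank achieves; the PCFB's optimality claim is that, among the filter banks considered, it maximizes this sum for the effective noise spectrum.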

    Chapter 8 A High-Resolution Multi-Scalar Approach for Micro-Mapping Historical Landscapes in Transition

    The relational complexity of urban and rural landscapes in space and in time. The development of historical geographical information systems (HGIS) and other methods from the digital humanities have revolutionised historical research on cultural landscapes. Additionally, the opening up of increasingly diverse collections of source material, often incomplete and difficult to interpret, has led to methodologically innovative experiments. One of today's major challenges, however, concerns the concepts and tools to be deployed for mapping processes of transformation—that is, interpreting and imagining the relational complexity of urban and rural landscapes, both in space and in time, at micro- and macro-scale. Mapping Landscapes in Transformation gathers experts from different disciplines, active in the fields of historical geography, urban and landscape history, archaeology and heritage conservation. They are specialised in a wide variety of space-time contexts, including regions within Europe, Asia, and the Americas, and periods from antiquity to the 21st century.

    Community health assessment using self-organizing maps and geographic information systems

    Background: From a public health perspective, a healthier community environment correlates with fewer occurrences of chronic or infectious diseases. Our premise is that community health is a non-linear function of environmental and socioeconomic effects that are not normally distributed among communities. The objective was to integrate multivariate data sets representing social, economic, and physical environmental factors to evaluate the hypothesis that communities with similar environmental characteristics exhibit similar distributions of disease. Results: The SOM algorithm used the intrinsic distributions of 92 environmental variables to classify 511 communities into five clusters. SOM-determined clusters were reprojected to geographic space and compared with the distributions of several health outcomes. ANOVA results indicated that the variability between community clusters was significant with respect to the spatial distribution of disease occurrence. Conclusion: Our study demonstrated a positive relationship between environmental conditions and health outcomes in communities using the SOM-GIS method to overcome data and methodological challenges traditionally encountered in public health research. Results demonstrated that community health can be classified using environmental variables and that the SOM-GIS method may be applied to multivariate environmental health studies.
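The clustering step of a self-organizing map can be sketched in a few lines: each sample pulls its best-matching unit and that unit's grid neighbours toward itself, and cluster labels are then read off as nearest units. This is a toy illustration on synthetic data, not the study's SOM-GIS pipeline (grid size, decay schedule, and data are all assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid_w=5, grid_h=1, epochs=200, lr=0.5, sigma=1.0):
    """Tiny online SOM: for each sample, find the best-matching unit
    (BMU) and pull the BMU and its grid neighbours toward the sample,
    with learning rate and neighbourhood width decaying over epochs."""
    n_units = grid_w * grid_h
    coords = np.array([[i % grid_w, i // grid_w] for i in range(n_units)], float)
    weights = rng.normal(size=(n_units, data.shape[1]))
    for t in range(epochs):
        frac = t / epochs
        a, s = lr * (1 - frac), sigma * (1 - frac) + 1e-3
        for x in data:
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
            h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1) / (2 * s ** 2))
            weights += a * h[:, None] * (x - weights)
    return weights

def assign_clusters(data, weights):
    """Label each sample with its nearest SOM unit."""
    d = ((data[:, None, :] - weights[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

# Two synthetic "community" groups described by 3 environmental variables
data = np.vstack([rng.normal(0, 0.3, (20, 3)), rng.normal(3, 0.3, (20, 3))])
labels = assign_clusters(data, train_som(data))
print(labels)
```

In the study, the resulting unit labels were reprojected to geographic space and compared against health outcomes; the sketch stops at the labeling step.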

    Optimizing the capacity of orthogonal and biorthogonal DMT channels

    The uniform DFT filter bank has been used routinely in discrete multitone (DMT) modulation systems because of its implementation efficiency. It has recently been shown that principal component filter banks (PCFB), which are known to be optimal for data compression and denoising applications, are also optimal for a number of criteria in DMT communication. In this paper we show that such filter banks are optimal even when scalar prefilters and postfilters are used around the channel. We show that the theoretically optimum scalar prefilter is the half-whitening solution, well known in data compression theory. We conclude with the observation that the PCFB continues to be optimal for the maximization of theoretical capacity as well.
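"Half-whitening" refers to the classical result that the optimal scalar prefilter has squared magnitude proportional to the inverse square root of the input spectrum, so the prefiltered spectrum becomes the square root of the original rather than being flattened completely. A minimal numerical illustration (the spectrum values are made up):

```python
import numpy as np

# Hypothetical input power spectrum sampled on a frequency grid
S_x = np.array([4.0, 1.0, 0.25, 0.0625])

# Half-whitening prefilter: |F|^2 proportional to S_x^(-1/2), so the
# prefiltered spectrum S_y = |F|^2 * S_x = S_x^(1/2) -- the spectrum
# is "half" whitened rather than made flat.
F_mag = S_x ** -0.25
S_y = F_mag ** 2 * S_x
print(S_y)  # equals sqrt(S_x)
```

A fully whitening prefilter (|F|^2 proportional to 1/S_x) would flatten S_y entirely; the half-whitening compromise is what the mean-square-optimal pre/postfilter analysis yields.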

    Elementary Psychology (University of Georgia)

    This Grants Collection for Elementary Psychology was created under a Round Two ALG Textbook Transformation Grant. Affordable Learning Georgia Grants Collections are intended to provide faculty with the frameworks to quickly implement or revise the same materials as a Textbook Transformation Grants team, along with the aims and lessons learned from project teams during the implementation process. Documents are in .pdf format, with a separate .docx (Word) version available for download. Each collection contains the following materials: Linked Syllabus, Initial Proposal, and Final Report.

    Development and validation of case-finding algorithms for recurrence of breast cancer using routinely collected administrative data

    Introduction: Recurrence-free survival is frequently investigated in cancer outcome studies; however, it is not explicitly documented in the cancer registry data widely used for research. Patterns of events after initial treatment, such as oncology visits, re-operation, chemotherapy, or radiation, may herald recurrence. Objectives and Approach: This study aimed to develop and validate algorithms for identifying breast cancer recurrence using large administrative data. Two cohorts with high recurrence rates were used: 1) all young (≤ 40 years) breast cancer patients (2007-2010), and 2) all neoadjuvant chemotherapy patients (2012-2014) in Alberta, Canada. Health events after primary treatment were obtained from the Alberta cancer registry, physician billing claims, and vital statistics databases. Positive recurrence status (locoregional, distant, or both) was ascertained by primary chart review. The cohort was divided into a development (60%) and a validation (40%) set. Algorithms geared towards high sensitivity, high PPV, and high accuracy, respectively, were developed using classification and regression tree (CART) models. Key variables in the models included a new round of chemotherapy, a second mastectomy, and a new cluster of radiologist, oncologist, or general surgeon visits occurring after the primary treatment. The sensitivity, specificity, PPV, NPV, and accuracy of the algorithms were calculated against the chart review data. Results: Of 606 patients, 121 (20%) had recurrence after a median follow-up of 4 years. The high-sensitivity algorithm had 94.2% (95% CI: 90.1-98.4%) sensitivity, 92.8% (90.5-95.1%) specificity, 76.5% (70.0-88.3%) PPV, 98.5% (97.3-99.6%) NPV and 93.1% (91.0-95.1%) accuracy. The high-PPV algorithm had 74.4% (66.6-82.2%) sensitivity, 97.8% (96.5-99.2%) specificity, 90.0% (84.1-95.9%) PPV, 93.6% (91.4-95.7%) NPV and 92.9% (90.9-95.0%) accuracy. The high-accuracy algorithm had 88.4% (82.7-94.1%) sensitivity, 97.1% (95.6-98.6%) specificity, 88.4% (82.7-94.1%) PPV, 97.1% (95.6-98.6%) NPV and 95.4% (93.7-97.1%) accuracy. Conclusion/Implications: The proposed algorithms achieved favourably high validity for identifying recurrence from widely available administrative data. Further study may be needed to improve sensitivity and PPV and to validate the algorithms in larger data sets for widespread use.
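The five validation quantities reported above all derive from a 2x2 confusion matrix of algorithm calls against chart-review truth. A short sketch of those definitions (the counts below are illustrative, not the study's confusion matrix):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and accuracy from a 2x2
    confusion matrix of algorithm calls vs. chart-review truth."""
    return {
        "sensitivity": tp / (tp + fn),   # true recurrences flagged
        "specificity": tn / (tn + fp),   # non-recurrences cleared
        "ppv": tp / (tp + fp),           # flagged cases truly recurrent
        "npv": tn / (tn + fn),           # cleared cases truly recurrence-free
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Illustrative counts only
m = diagnostic_metrics(tp=90, fp=10, fn=10, tn=390)
print({k: round(v, 3) for k, v in m.items()})
```

The three algorithms trade these quantities off: tightening the rule raises PPV at the cost of sensitivity, which is why sensitivity-, PPV-, and accuracy-oriented variants were developed separately.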